Hi Rob, can you tell us more about you and your work at Eidos-Montréal?
Hello! I am a Senior Audio Director at Eidos-Montréal! What do I do exactly? To put it simply, I am assigned to one of our triple-A game teams, where I am responsible for everything that makes up the sound of the game – music, voice, sound and mix! I work with a fantastic audio team, and a super strong director team, to craft the most exciting video game experiences in the industry!
What is spatial audio exactly?
It’s a very good question. Spatial audio is a very broad category talked about across cinema, music and video games. For us, it is essentially any method by which the player can experience fully immersive 3D sound while playing a game. That can be over headphones using binaural encoding technologies (such as Dolby Atmos for Headphones, Microsoft’s Windows Sonic for Headphones, or Sony’s 3D Audio), or over speakers, whether via a home theatre AV receiver with multiple surround speakers or via a soundbar set up in your living room to reflect sound around the room.
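To make the headphone case a little more concrete: at its core, binaural rendering places a sound by filtering it through a pair of head-related impulse responses (HRIRs), one per ear. The sketch below is only a minimal illustration of that principle – it is not how any of the technologies named above work internally, and the HRIR arrays are assumed to come from a measured dataset for the desired direction.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    """Place a mono source at the direction represented by an HRIR pair.

    hrir_left/hrir_right are head-related impulse responses measured (or
    interpolated) for the desired azimuth/elevation; convolving the source
    with each ear's response recreates the interaural time and level cues
    our brains use to localise sound.
    """
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # two-channel signal for headphones
```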
As creators, we are specifically very excited by the clarity and transparency of all the new emerging binaural rendering technologies for headphones, as headphones are very much a natural and affordable way of experiencing games for most players – this allows us to deliver the same experience someone would get from a full surround sound AV receiver with multiple speakers set up all around the room. So, now that we have the ears of everyone in our 3D worlds, our work is even more focussed on really showing our audiences what we can do and making sure we continually amaze them.
How did this collaboration between Eidos-Montréal and McGill happen?
I met some students from the McGill music programme at AES NY in 2018 while I was demoing the Dolby Atmos mix of Shadow of the Tomb Raider, and we started talking about sharing some knowledge (the location of our studios means that we are virtually neighbours!). As an extension of that, I got to know and became good friends with Professor Wieslaw Woszczyk, and we started hosting some of the McGill students at Eidos-Montréal to listen to and ask questions about some of our work in our mix room. Then, I think in late 2019, Wieslaw let us know about a very prestigious and competitive funding opportunity that was open for proposals. We immediately felt that a collaboration could be really exciting, beneficial and interesting: creatively exploring 3D spatial music within a 3D game environment, having a piece of music be explorable by a player through space. For us at Eidos-Montréal, this was exactly the kind of thing that we would normally not have the time and resources to explore in a day-to-day video game development schedule. To our delight, we were one of the few proposals that got the green light for funding, and here we are!
Can you tell us more about what you have been working on with the McGill University team?
We are creating a short playable demo highlighting 3D spatial music. This will be a mix of recording techniques like Higher Order Ambisonics (capturing multiple directions of the sound with 32 or more microphone capsules housed within a single microphone unit – similar to a 360-degree camera) and object-based sound cues, and will take into account the recording, composition, integration and mix of music within a small, mysterious, 3D game-like environment. The demo will be cross-platform and freely available, and we want as many people as possible to experience it!
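For readers curious about what an Ambisonic signal actually is: the sound field is decomposed onto spherical-harmonic components, and the same maths can place a mono source into that field. Below is a minimal first-order (B-format) encoding sketch, assuming ACN channel ordering and SN3D normalisation; Higher Order Ambisonics simply adds more harmonic channels (a 32-capsule microphone can resolve up to fourth order, i.e. 25 channels), which sharpens the spatial resolution.

```python
import numpy as np

def encode_first_order(mono, azimuth, elevation):
    """Encode a mono signal into first-order Ambisonics (ACN/SN3D).

    azimuth/elevation give the source direction in radians. Each output
    channel weights the signal by one first-order spherical harmonic;
    higher orders would append further channels in the same way.
    """
    w = mono                                          # omnidirectional component
    y = mono * np.sin(azimuth) * np.cos(elevation)    # left/right component
    z = mono * np.sin(elevation)                      # up/down component
    x = mono * np.cos(azimuth) * np.cos(elevation)    # front/back component
    return np.stack([w, y, z, x], axis=-1)            # ACN order: W, Y, Z, X
```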
We’re building an immersive, emotional space for players to explore. For this demo we are not interested in following the usual mode of ‘gameplay’ where a player needs to navigate a ‘realistic’ 3D environment and be aware of threats. To offer some background to this idea, I became extremely inspired by a short film I saw called ‘Upstream’ by Rob Petit and Robert Macfarlane. It is shot entirely in beautiful black and white with drone cameras and is very ambiguous, but the soundtrack explores the poetry and the sense of time and mystery that is baked into a natural environment (the River Dee in Scotland). Some of those elements of dream-like ambiguity, I felt, would really work for a setting in which spatial music was a predominant factor. When we are on the controller, we need to be in this almost dreamlike state in order to let the music and emotions surround and lead us, rather than worrying about sound-effect sources or threats from enemies offscreen as we might in a more typical gameplay environment. So, the demo environment is very much about setting the scene, and then seeing where we can take the audience from there.
Another exciting thing this project has enabled us to do is to collaborate again with the composer Brian D’Oliveira and his La Hacienda Creative team. Brian was a natural choice for us: having recently worked together on Shadow of the Tomb Raider, we always wanted to go much deeper into exploring 3D and spatial music and how it can function as a storytelling medium in terms of 3D spatial immersion – a role usually left to the sound design of a game. 3D music has inherent risks for storytelling and player immersion, as it can introduce unintentional confusion for the player when navigating and understanding an artificially constructed 3D space – especially if musical sounds feel like they could be ‘sound effects’ – so there are certain things we need to consider in the choice of instrumentation and textures for the score for it to feel comfortable and readable in 3D space.
My job is to help guide the team, listen to all the expert technical voices and set up the creative framework within which we can experiment, as well as delivering something playable and compelling when we are finished in December this year. So, yeah, it is a lot of fun.
What’s the most exciting part of your collaboration with McGill University?
So far, we have mapped out the layout of our 3D environment in the game engine, and we have recorded some initial prototype music tests in the very large MMR (Music Multimedia Room) at McGill. I’d say we are roughly halfway through the process. We now have many of the technologies and workflows up and running, and we’ve tested the limits of what is possible with multiple layered Ambisonic recordings, so we can really play with how we implement and blend the recordings in our audio and game engines. We’re at the stage where we can have fun and try things out together – we’re into the iterative creative part, where we get to play directly with what is onscreen. We’re discovering new things all the time.
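One concrete piece of that blending puzzle, for the technically curious: because an Ambisonic recording represents the whole sound field, layered recordings can be crossfaded channel-by-channel, and the field can be counter-rotated as the player turns. The sketch below shows both operations for first-order material, assuming the same ACN channel order as the earlier example; real game-audio middleware handles this internally, so this is purely illustrative.

```python
import numpy as np

def blend_foa(field_a, field_b, mix):
    """Crossfade two first-order Ambisonic fields (shape: samples x 4).

    Ambisonic channels are linear, so blending layered recordings is a
    simple per-channel interpolation; mix=0 gives field_a, mix=1 field_b.
    """
    return (1.0 - mix) * field_a + mix * field_b

def rotate_foa_yaw(field, yaw):
    """Rotate a first-order field about the vertical axis by yaw radians.

    W (omni) and Z (height) are unaffected; Y and X mix together as the
    sin/cos of the shifted azimuth. An engine applies the inverse of the
    listener's heading so the scene stays world-anchored as they turn.
    """
    w, y, z, x = field.T
    y_rot = y * np.cos(yaw) + x * np.sin(yaw)
    x_rot = x * np.cos(yaw) - y * np.sin(yaw)
    return np.stack([w, y_rot, z, x_rot], axis=-1)
```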
It is very exciting and rewarding working with the team at McGill; they bring a completely fresh perspective from the world of research, music recording, performance and mixing, and for us that is wonderful. It is a real treat to have a project that is 100% music-led and music-focussed, meaning that all our creative, technological and storytelling energy is focussed on what emotions players feel and what they hear.